
Robot Performance


Demonstration Based Explainable AI for Learning from Demonstration Methods

Morris Gu, Elizabeth Croft, Dana Kulic

arXiv.org Artificial Intelligence

Abstract: Learning from Demonstration (LfD) is a powerful form of machine learning that allows novices to teach and program robots to complete various tasks. However, the learning process of these systems can still be difficult for novices to interpret and understand, making effective teaching challenging. Explainable artificial intelligence (XAI) aims to address this challenge by explaining a system to the user. In this work, we investigate XAI within LfD by implementing an adaptive explanatory feedback system on top of an inverse reinforcement learning (IRL) algorithm. Feedback is provided by demonstrating selected learnt trajectories to users. The system adapts to user teaching by categorizing learnt trajectories and then selectively sampling them, so that users see a representative mix of both successful and unsuccessful trajectories. The system was evaluated through a user study in which 26 participants taught a robot a navigation task. The results show that the proposed explanatory feedback system can improve robot performance, teaching efficiency, and user understanding of the robot.
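The sampling strategy the abstract describes suggests a simple pattern: bucket the learnt trajectories by outcome, then draw a balanced subset to replay for the user. The sketch below is a minimal illustration of that idea only; the Trajectory class, the sample_feedback helper, and the 50/50 split are assumptions for illustration, not the authors' implementation.

    import random
    from dataclasses import dataclass

    @dataclass
    class Trajectory:
        states: list      # sequence of robot states along the rollout
        success: bool     # whether the rollout completed the task

    def sample_feedback(trajectories, k=4, success_ratio=0.5, seed=0):
        """Draw a representative mix of successful and failed trajectories."""
        rng = random.Random(seed)
        good = [t for t in trajectories if t.success]
        bad = [t for t in trajectories if not t.success]
        n_good = min(len(good), round(k * success_ratio))
        n_bad = min(len(bad), k - n_good)
        # Replay the selected trajectories to the user as explanatory feedback.
        return rng.sample(good, n_good) + rng.sample(bad, n_bad)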


Towards Inferring Users' Impressions of Robot Performance in Navigation Scenarios

Qiping Zhang, Nathan Tsoi, Booyeon Choi, Jie Tan, Hao-Tien Lewis Chiang, Marynel Vázquez

arXiv.org Artificial Intelligence

Human impressions of robot performance are often measured through surveys. As a more scalable and cost-effective alternative, we study the possibility of predicting people's impressions of robot behavior using non-verbal behavioral cues and machine learning techniques. To this end, we first contribute the SEAN TOGETHER Dataset, consisting of observations of an interaction between a person and a mobile robot in a Virtual Reality simulation, together with impressions of robot performance provided by users on a 5-point scale. Second, we contribute analyses of how well humans and supervised learning techniques can predict perceived robot performance from different combinations of observation types (e.g., facial, spatial, and map features). Our results show that facial expressions alone provide useful information about human impressions of robot performance, but in the navigation scenarios we tested, spatial features are the most critical piece of information for this inference task. Also, when results are evaluated as binary classification (rather than multiclass classification), the F1 scores of human predictions and machine learning models more than double, showing that both are better at judging the directionality of robot performance than at predicting exact performance ratings. Based on our findings, we provide guidelines for deploying these prediction models in real-world navigation scenarios.
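The binary-versus-multiclass comparison in the abstract is easy to reproduce in outline. The sketch below is illustrative only: the random arrays stand in for the dataset's features and ratings (not the actual SEAN TOGETHER data), and the random-forest classifier and the >= 4 "high performance" cut-off are assumptions, not details from the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    # Placeholder data: random stand-ins for spatial features and 1-5 ratings.
    rng = np.random.default_rng(0)
    X_spatial = rng.random((500, 8))          # e.g., distances, relative headings
    y_rating = rng.integers(1, 6, size=500)   # perceived performance, 1..5

    X_tr, X_te, y_tr, y_te = train_test_split(X_spatial, y_rating, random_state=0)
    pred = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)

    # Multiclass: predict the exact 1-5 rating.
    print("multiclass F1:", f1_score(y_te, pred, average="macro"))

    # Binary: collapse ratings into low (< 4) vs. high (>= 4) performance.
    print("binary F1:", f1_score(y_te >= 4, pred >= 4, average="macro"))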


Video Friday: Robots With Airbags, Drone vs. Drone, and MIT's Jumping Cube

IEEE Spectrum Robotics

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next two months; here's what we have so far (send us your events!). Enjoy today's videos, and let us know if you have suggestions for next week. This is one of the best things I have ever seen. In this video we present a new safety module that ensures safe robot operation with different tools in collaborative tasks. This module, inflated with air during robot motion, covers mounted tools and carried workpieces.


Katsushi Ikeuchi: e-Intangible Heritage (CMU RI Seminar)

Robohub

Abstract: "Tangible heritage, such as temples and statues, is disappearing day-by-day due to human and natural disaster. In e-tangible heritage, such as folk dances, local songs, and dialects, has the same story due to lack of inheritors and mixing cultures. We have been developing methods to preserve such tangible and in-tangible heritage in the digital form. This project, which we refer to as e-Heritage, aims not only record heritage, but also analyzes those recorded data for better understanding as well as displays those data in new forms for promotion and education. This talk consists of three parts. The first part briefly covers e-Tangible heritage, in particular, our projects in Cambodia and Kyushu. Here I emphasize not only challenge in data acquisition but also the importance to create the new aspect of science, Cyber-archaeology, which allows us to have new findings in archaeology, based on obtained digital data. The second part covers how to display a Japanese folk dance by the performance of a humanoid robot. Here, we follow the paradigm, learning-from-observation, in which a robot learns how to perform a dance from observing a human dance performance. Due to the physical difference between a human and a robot, the robot cannot exactly mimic the human actions. Instead, the robot first extracts important actions of the dance, referred to key poses, and then symbolically describes them using Labanotation, which the dance community has been using for recording dances. Finally, this labanotation is mapped to each different robot hardware for reconstructing the original dance performance. The third part tries to answer the question, what is the merit to preserve folk dances by using robot performance by the answer that such symbolic representations for robot performance provide new understandings of those dances. In order to demonstrate this point, we focus on folk dances of native Taiwanese, which consists of 14 different tribes. We have converted those folk dances into Labanotation for robot performance. Further, by analyzing these Labanotations obtained, we can clarify the social relations among these 14 tribes."